133 research outputs found

    Robust cue integration: a Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant.

    Most research on depth cue integration has focused on stimulus regimes in which stimuli contain the small cue conflicts that one might expect to normally arise from sensory noise. In these regimes, linear models for cue integration provide a good approximation to system performance. This article focuses on situations in which large cue conflicts can naturally occur in stimuli. We describe a Bayesian model for nonlinear cue integration that makes rational inferences about scenes across the entire range of possible cue conflicts. The model derives from the simple intuition that multiple properties of scenes or causal factors give rise to the image information associated with most cues. To make perceptual inferences about one property of a scene, an ideal observer must necessarily take into account the possible contribution of these other factors to the information provided by a cue. In the context of classical depth cues, large cue conflicts most commonly arise when one or another cue is generated by an object or scene that violates the strongest form of constraint that makes the cue informative. For example, when binocularly viewing a slanted trapezoid, the slant interpretation of the figure derived by assuming that the figure is rectangular may conflict greatly with the slant suggested by stereoscopic disparities. An optimal Bayesian estimator incorporates the possibility that different constraints might apply to objects in the world and robustly integrates cues with large conflicts by effectively switching between different internal models of the prior constraints underlying one or both cues. We performed two experiments to test the predictions of the model when applied to estimating surface slant from binocular disparities and the compression cue (the aspect ratio of figures in an image). 
    The apparent weight that subjects gave to the compression cue decreased smoothly as a function of the conflict between the cues but did not shrink to zero; that is, subjects did not fully veto the compression cue at large cue conflicts. A Bayesian model that assumes a mixed prior distribution of figure shapes in the world, with a large proportion being very regular and a smaller proportion having random shapes, provides a good quantitative fit for subjects' performance. The best-fitting model parameters are consistent with the sensory noise to be expected in measurements of figure shape, further supporting the Bayesian model as an account of robust cue integration.
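The mixture-prior mechanism described above can be sketched numerically. This is a minimal model with illustrative (not fitted) noise parameters: the observer averages two internal models, one assuming a regular figure (compression cue informative) and one assuming a random shape (compression cue nearly uninformative), weighted by how well each explains the observed cue conflict.

```python
import numpy as np

def robust_slant_estimate(m_d, m_c, sd=2.0, sc=2.0, s_broad=20.0, p_reg=0.9):
    """Posterior-mean slant under a mixture prior on figure shape.

    m_d, m_c : slants suggested by disparity and compression cues (deg)
    sd, sc   : cue noise std devs when each cue's constraint holds
    s_broad  : effective std dev of the compression cue when the figure
               has a random (irregular) shape
    p_reg    : prior probability that the figure is regular

    All parameter values here are illustrative assumptions.
    """
    # Model 1: figure regular -> both cues informative, linear combination
    w1 = (1 / sd**2) / (1 / sd**2 + 1 / sc**2)
    est_reg = w1 * m_d + (1 - w1) * m_c
    # Model 2: figure irregular -> compression cue nearly uninformative
    w2 = (1 / sd**2) / (1 / sd**2 + 1 / s_broad**2)
    est_irr = w2 * m_d + (1 - w2) * m_c
    # Posterior probability of each model given the observed conflict
    conflict = m_d - m_c
    def evidence(s_c_eff):
        var = sd**2 + s_c_eff**2
        return np.exp(-conflict**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    p1 = p_reg * evidence(sc)
    p2 = (1 - p_reg) * evidence(s_broad)
    q = p1 / (p1 + p2)
    return q * est_reg + (1 - q) * est_irr

# Effective compression-cue weight falls smoothly with conflict
# but never reaches zero, as in the experiments:
for conflict in [5, 15, 30]:
    est = robust_slant_estimate(30.0, 30.0 - conflict)
    print(conflict, round((30.0 - est) / conflict, 3))
```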

    Cue integration outside central fixation: A study of grasping in depth.

    We assessed the usefulness of stereopsis across the visual field by quantifying how retinal eccentricity and distance from the horopter affect humans' relative dependence on monocular and binocular cues about 3D orientation. The reliabilities of monocular and binocular cues both decline with eccentricity, but the reliability of binocular information decreases more rapidly. Binocular cue reliability also declines with increasing distance from the horopter, whereas the reliability of monocular cues is virtually unaffected. We measured how subjects integrated these cues to orient their hands when grasping oriented discs at different eccentricities and distances from the horopter. Subjects relied increasingly less on binocular disparity as targets' retinal eccentricity and distance from the horopter increased. The measured cue influences were consistent with what would be predicted from the relative cue reliabilities at the various target locations. Our results showed that relative reliability affects how cues influence motor control and that stereopsis is of limited use in the periphery and away from the horopter because monocular cues are more reliable in these regions.

    Keywords: binocular vision, spatial vision, 3D surface and shape perception, grasping, cue integration
    Citation: Journal of Vision (2009) 9(2):11, 1-16 (Greenwald & Knill)

    Introduction
    Most conclusions about visual perception have been based on foveal vision, since this is where visual acuity, and thus performance on most tasks, is best, and it is well established that stereopsis contributes to perception and motor control when stimuli are in the central portion of the visual field. However, peripheral regions of the visual field also significantly impact how we navigate through and interact with the world. Information from the periphery is particularly important for planning and executing reaching movements: it helps us plan both the saccades that move the eyes so that the desired objects project onto the foveae and the reaching movements themselves. Surprisingly few studies have focused on stereoacuity, the ability to use binocular disparity as a depth cue, away from the fovea, although it is agreed that thresholds for stereopsis increase with retinal eccentricity. This decrease in sensitivity appears to reflect decreases in the amount of cortical representation in the periphery rather than the visual angle per se.

    We tested these predictions in three experiments that required human subjects to use monocular and binocular information to estimate the 3D orientations of stimuli at different retinal eccentricities and distances from the horopter. Our first experiment separately measured monocular and binocular thresholds for 3D orientation discrimination at different retinal eccentricities along the theoretical horopter. We then used a grasping task to quantify how subjects integrated monocular and binocular information about 3D orientation at these same positions, and compared the cue integration strategies we observed with those predicted by sensitivity to the individual cues at each retinal location. In Experiment 3, we investigated how increasing the targets' distance from the theoretical horopter affected the contribution of stereopsis to subjects' 3D orientation estimates.

    Experiment 1: Eccentric monocular and binocular slant thresholds
    We separately measured 3D orientation thresholds from aspect ratio, a monocular cue, and disparity, a binocular cue, at the fixation point and at two points in the periphery. This enabled us to predict how the relative influences of the cues should change as a function of eccentricity.

    Method
    Subjects: The ten subjects in this experiment were laboratory staff, graduate students, or postdoctoral fellows in the Department of Brain & Cognitive Sciences and/or the Center for Visual Science at the University of Rochester. All subjects had normal or corrected-to-normal vision and binocular acuity of at least 40 arc seconds, provided written informed consent, and were paid $10 per hour. We used experienced psychophysical observers to obtain the best possible threshold estimates; although they were aware that the purpose of the experiment was to estimate psychophysical thresholds, they were not informed of the details of the staircases we used or of our hypotheses. All experiments reported here followed protocols specified by the University of Rochester Research Subjects Review Board.

    Apparatus: Participants viewed a 20 in. display (1152 × 864 resolution, 118 Hz refresh rate) through a half-silvered mirror, and we occluded their left eye with a patch during monocular trials. Each subject viewed the monocular stimuli with their right eye because stimuli appeared to the right of fixation.

    Calibration procedures: We first identified the locations of each subject's eyes relative to the monitor. At the beginning of each session, the backing of the half-silvered mirror was removed, which allowed subjects to see the monitor and their hand simultaneously. Subjects positioned an infrared marker at a series of visually cued locations so that the marker and a symbol presented monocularly on the monitor appeared to be aligned. Thirteen positions were matched for each eye at two different depth planes, and we calculated the 3D position of each eye relative to the center of the display by minimizing the squared error between the measured position of the marker and the position we predicted from the estimated eye locations. Subjects then moved the marker around the workspace and confirmed that a symbol presented binocularly in depth appeared at the same location as the infrared marker. To calibrate the eyetracker, we recorded the positions of both eyes for binocular conditions, or the right eye for monocular conditions, as subjects fixated points in a 3 × 3 grid displayed on the screen. The eyetracker was calibrated at the start of each experimental block and after subjects removed their head from the chinrest, and drift corrections were performed after every five fixation losses or as needed. Fixation losses occurred when subjects looked away from the fixation target or when their measured eye positions drifted significantly from the calibrated positions.

    Stimuli: In both monocular and binocular conditions, a red wireframe sphere (RGB = (0.8, 0, 0)) with a diameter of 1 cm (approximately 1 degree of visual angle) served as a fixation target. It appeared 4 cm to the left and 8 cm below the center of the display and 4 cm behind the accommodative plane of the display, so that stimuli were near this plane at all retinal eccentricities.
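The prediction tested in this study is standard inverse-variance (reliability-weighted) cue combination. A small sketch, with made-up threshold numbers standing in for the measured monocular and binocular reliabilities at each eccentricity:

```python
def cue_weights(sigmas):
    """Inverse-variance (reliability) weights for independent, unbiased cues."""
    r = [1.0 / s**2 for s in sigmas]
    total = sum(r)
    return [ri / total for ri in r]

def combine(estimates, sigmas):
    """Reliability-weighted combination of cue estimates."""
    return sum(w * e for w, e in zip(cue_weights(sigmas), estimates))

# Hypothetical slant-discrimination thresholds (deg): the disparity cue
# degrades faster with eccentricity than the monocular aspect-ratio cue,
# so its predicted weight drops in the periphery.
for ecc, (sig_stereo, sig_mono) in {0: (3.0, 6.0), 30: (12.0, 8.0)}.items():
    w_stereo, _ = cue_weights([sig_stereo, sig_mono])
    print(f"eccentricity {ecc} deg: stereo weight = {w_stereo:.2f}")
```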

    Optimal and Efficient Decoding of Concatenated Quantum Block Codes

    We consider the problem of optimally decoding a quantum error correction code -- that is, to find the optimal recovery procedure given the outcomes of partial "check" measurements on the system. In general, this problem is NP-hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the 5-qubit code and Steane's code on a depolarizing channel demonstrate significant advantages of message passing in two respects: 1) optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; 2) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead. Comment: Published version.
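The advantage of passing likelihoods up through a concatenated code, rather than hard-decoding each inner block first, has a simple classical analogue. The sketch below uses a concatenated classical repetition code on a binary symmetric channel, not the quantum codes of the paper: for repetition codes, propagating the full inner-block likelihoods is equivalent to a maximum-likelihood vote over all raw bits, and it beats blockwise hard decoding.

```python
from math import comb

def bsc_majority_error(n, p):
    """Probability that a majority vote over n independent BSC(p) bits is wrong."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def hard_concat_error(p):
    """Blockwise hard decoding of a 3-in-3 concatenated repetition code:
    majority-decode each inner 3-bit block, then majority over the blocks."""
    q = bsc_majority_error(3, p)   # inner-block failure probability
    return bsc_majority_error(3, q)

def soft_concat_error(p):
    """Message passing: for repetition codes, propagating inner likelihoods
    reduces to an ML majority vote over all 9 raw bits."""
    return bsc_majority_error(9, p)

p = 0.1
# Soft decoding yields a lower logical error rate than hard decoding:
print(hard_concat_error(p), soft_concat_error(p))
```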

    Classical simulation of noninteracting-fermion quantum circuits

    We show that a class of quantum computations that was recently shown to be efficiently simulatable on a classical computer by Valiant corresponds to a physical model of noninteracting fermions in one dimension. We give an alternative proof of his result using the language of fermions and extend the result to noninteracting fermions with arbitrary pairwise interactions, where gates can be conditioned on outcomes of complete von Neumann measurements in the computational basis on other fermionic modes in the circuit. This last result is in remarkable contrast with the case of noninteracting bosons, where universal quantum computation can be achieved by allowing gates to be conditioned on classical bits (quant-ph/0006088). Comment: 26 pages, 1 figure, uses wick.sty; references added to recent results by E. Knill.
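The reason such circuits are classically tractable is that a quadratic (noninteracting) Hamiltonian only mixes modes linearly, so an N-mode state can be tracked through an N×N correlation matrix instead of a 2^N-dimensional state vector. A minimal number-conserving sketch of this bookkeeping (the paper's conditioned-measurement extension goes beyond it):

```python
import numpy as np

def evolve_correlations(h, C0, t):
    """Evolve the correlation matrix C_ij = <a_i^dag a_j> under a quadratic
    Hamiltonian H = sum_ij h_ij a_i^dag a_j.  The cost is polynomial in the
    number of modes N, versus 2^N for a generic state vector -- the core of
    the classical-simulability result."""
    w, V = np.linalg.eigh(h)
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return U @ C0 @ U.conj().T

N = 8
# Nearest-neighbor hopping chain (a simple 1D noninteracting-fermion model)
h = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
C0 = np.zeros((N, N))
C0[0, 0] = 1.0                      # one fermion initially on site 0
C = evolve_correlations(h, C0, t=1.0)
occ = np.real(np.diag(C))
print(np.round(occ, 3))             # occupations spread; total stays 1
```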

    Hiding bits in Bell states

    We present a scheme for hiding bits in Bell states that is secure even when the sharers Alice and Bob are allowed to carry out local quantum operations and classical communication. We prove that the information that Alice and Bob can gain about a hidden bit is exponentially small in n, the number of qubits in each share, and can be made arbitrarily small for hiding multiple bits. We indicate an alternative efficient low-entanglement method for preparing the shared quantum states. We discuss how our scheme can be implemented using present-day quantum optics. Comment: 4 pages RevTex, 1 figure, various small changes and additional paragraph on optics implementation.
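What makes Bell states a natural hiding medium is that each one looks, locally, like the maximally mixed state: neither share alone carries any information about which Bell state it belongs to. A quick single-copy numerical check of that fact (the paper's security proof covers the much stronger LOCC setting):

```python
import numpy as np

def bell_states():
    """The four two-qubit Bell states as vectors in the |00>,|01>,|10>,|11> basis."""
    s = 1 / np.sqrt(2)
    return [s * np.array([1, 0, 0, 1]),   # |Phi+>
            s * np.array([1, 0, 0, -1]),  # |Phi->
            s * np.array([0, 1, 1, 0]),   # |Psi+>
            s * np.array([0, 1, -1, 0])]  # |Psi->

def reduced_state(psi):
    """Alice's reduced density matrix: trace out Bob's qubit."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # (a, b, a', b')
    return np.trace(rho, axis1=1, axis2=3)               # sum over b = b'

# Every Bell state reduces to I/2 on Alice's side:
for psi in bell_states():
    print(np.round(reduced_state(psi), 3))
```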

    Testing integrability with a single bit of quantum information

    We show that deterministic quantum computing with a single bit (DQC1) can determine whether the classical limit of a quantum system is chaotic or integrable using O(N) physical resources, where N is the dimension of the Hilbert space of the system under study. This is a square root improvement over all known classical procedures. Our study relies strictly on the random matrix conjecture. We also present numerical results for the nonlinear kicked top. Comment: Minor changes taking into account Howard Wiseman's comment: quant-ph/0305153. Accepted for publication in Phys. Rev.
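The DQC1 primitive behind this result estimates the normalized trace of a unitary: prepare one clean qubit in |+> alongside a maximally mixed register, apply controlled-U, and the clean qubit then satisfies <X> + i<Y> = Tr(U)/N. A small density-matrix check of that identity for a random 4-dimensional U (the integrability test applies it to the system's quantized propagator):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(A)                      # a random unitary, N = 4

# State after the Hadamard: |+><+| on the clean qubit, I/N on the register
rho = np.kron(np.array([[0.5, 0.5], [0.5, 0.5]]), np.eye(4) / 4)

# Controlled-U (control = clean qubit, target = register)
cU = np.block([[np.eye(4), np.zeros((4, 4))],
               [np.zeros((4, 4)), Q]])
rho = cU @ rho @ cU.conj().T

X = np.kron(np.array([[0, 1], [1, 0]]), np.eye(4))
Y = np.kron(np.array([[0, -1j], [1j, 0]]), np.eye(4))
ex = np.trace(X @ rho).real
ey = np.trace(Y @ rho).real
print(ex + 1j * ey, np.trace(Q) / 4)        # the two values coincide
```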

    Quantum Channel Capacity of Very Noisy Channels

    We present a family of additive quantum error-correcting codes whose capacities exceed that of quantum random coding (hashing) for very noisy channels. These codes provide non-zero capacity in a depolarizing channel for fidelity parameters f when f > 0.80944. Random coding has non-zero capacity only for f > 0.81071; by analogy to the classical Shannon coding limit, this value had previously been conjectured to be a lower bound. We use the method introduced by Shor and Smolin of concatenating a non-random (cat) code within a random code to obtain good codes. The cat code with block size five is shown to be optimal for single concatenation. The best known multiple-concatenated code we found has a block size of 25. We derive a general relation between the capacity attainable by these concatenation schemes and the coherent information of the inner code states. Comment: 31 pages including epsf postscript figures. Replaced to correct important typographical errors in equations 36, 37 and in text.
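The 0.81071 hashing threshold quoted above can be reproduced from the hashing bound itself: the random-coding rate 1 - S(f) on a depolarizing channel crosses zero at that fidelity. A sketch under the standard Bell-diagonal parameterization, where the three error states each carry probability (1 - f)/3:

```python
from math import log2

def hashing_rate(f):
    """Hashing (random-coding) rate 1 - S for a depolarizing channel whose
    Bell-diagonal state has fidelity f and error probabilities (1 - f)/3 each."""
    e = (1 - f) / 3
    S = -f * log2(f) - 3 * e * log2(e)
    return 1 - S

# Bisect for the fidelity at which the hashing rate first becomes positive
lo, hi = 0.75, 0.95
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if hashing_rate(mid) < 0 else (lo, mid)
print(f"{lo:.4f}")   # ~0.8107, the hashing threshold quoted above
```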